
    ENHANCE (ENriching Health data by ANnotations of Crowd and Experts): A case study for skin lesion classification

    We present ENHANCE, an open dataset with multiple annotations to complement the existing ISIC and PH2 skin lesion classification datasets. This dataset contains annotations of visual ABC (asymmetry, border, colour) features from non-expert annotation sources: undergraduate students, crowd workers from Amazon MTurk and classic image processing algorithms. In this paper we first analyse the correlations between the annotations and the diagnostic label of the lesion, as well as study the agreement between different annotation sources. Overall we find weak correlations of non-expert annotations with the diagnostic label, and low agreement between different annotation sources. We then study multi-task learning (MTL) with the annotations as additional labels, and show that non-expert annotations can improve (ensembles of) state-of-the-art convolutional neural networks via MTL. We hope that our dataset can be used in further research into multiple annotations and/or MTL. All data and models are available on GitHub: https://github.com/raumannsr/ENHANCE
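
    As a minimal sketch of the multi-task learning setup described above, one head of a shared CNN backbone predicts the diagnosis while an auxiliary head regresses the ABC annotations. The backbone choice, layer sizes and loss weight below are illustrative assumptions, not the configuration used in the paper.

    # Illustrative sketch only: shared backbone, a diagnostic head and an
    # auxiliary ABC (asymmetry, border, colour) regression head.
    import torch.nn as nn
    from torchvision import models

    class MultiTaskSkinNet(nn.Module):
        def __init__(self, n_abc_features: int = 3):
            super().__init__()
            backbone = models.resnet18(weights=None)      # any CNN backbone would do
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()                   # keep pooled features
            self.backbone = backbone
            self.diagnosis_head = nn.Linear(feat_dim, 1)             # melanoma vs. not
            self.abc_head = nn.Linear(feat_dim, n_abc_features)      # A, B, C scores

        def forward(self, x):
            feats = self.backbone(x)
            return self.diagnosis_head(feats), self.abc_head(feats)

    def mtl_loss(diag_logit, abc_pred, diag_target, abc_target, abc_weight=0.5):
        """Diagnostic loss plus a down-weighted auxiliary annotation loss."""
        diag_loss = nn.functional.binary_cross_entropy_with_logits(diag_logit, diag_target)
        abc_loss = nn.functional.mse_loss(abc_pred, abc_target)
        return diag_loss + abc_weight * abc_loss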

    Parameter Optimization for Image Denoising Based on Block Matching and 3D Collaborative Filtering

    Clinical MRI images are generally corrupted by random noise during acquisition, which blurs subtle structure features. Many denoising methods have been proposed to remove noise from corrupted images, at the expense of distorted structure features. There is therefore always a compromise between removing noise and preserving structure information. For a specific denoising method, it is crucial to tune its parameters so that the best tradeoff can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and of structure information preserved in the denoised image. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize the cost functions by modifying parameters associated with the denoising method. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to enhance the image denoising results of block matching and 3D collaborative filtering. Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of noise removal and structure information preservation.
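
    A minimal sketch of the multi-objective tuning idea: each candidate parameter setting is scored on two competing cost functions and only Pareto-optimal settings are kept. The paper optimizes with SPEA2; a plain grid evaluation with a Pareto filter is used here only to keep the example short, and both cost functions as well as the denoise placeholder are illustrative assumptions rather than the paper's definitions.

    # Sketch: evaluate competing denoising objectives and keep the Pareto front.
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def costs(reference, denoised):
        """Two objectives to minimize: remaining error and structure distortion."""
        residual_error = np.mean((denoised - reference) ** 2)
        structure_loss = 1.0 - ssim(denoised, reference,
                                    data_range=reference.max() - reference.min())
        return residual_error, structure_loss

    def pareto_front(points):
        """Indices of parameter settings not dominated by any other setting."""
        pts = np.asarray(points, dtype=float)
        front = []
        for i, p in enumerate(pts):
            dominated = any((q <= p).all() and (q < p).any()
                            for j, q in enumerate(pts) if j != i)
            if not dominated:
                front.append(i)
        return front

    # Hypothetical usage, assuming a denoise(noisy, sigma) function (e.g. BM3D):
    # settings = np.linspace(5, 50, 10)
    # scores = [costs(clean, denoise(noisy, sigma=s)) for s in settings]
    # best_settings = settings[pareto_front(scores)]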

    Hybrid Committee Classifier for a Computerized Colonic Polyp Detection System

    We present a hybrid committee classifier for computer-aided detection (CAD) of colonic polyps in CT colonography (CTC). The classifier involved an ensemble of support vector machines (SVMs) and neural networks (NNs) for classification, a progressive search algorithm for selecting the features used by the SVMs, and a floating search algorithm for selecting the features used by the NNs. A total of 102 quantitative features were calculated for each polyp candidate found by a prototype CAD system. Three features were selected for each of 7 SVM classifiers, which were then combined to form an SVM committee classifier. Similarly, features (numbering from 10 to 20) were selected for 11 NN classifiers, which were again combined to form an NN committee classifier. Finally, a hybrid committee classifier was defined by combining the outputs of both the SVM and NN committees. The method was tested on CTC scans (supine and prone views) of 29 patients, in terms of the partial area under the free-response receiver operating characteristic (FROC) curve (AUC). Our results showed that the hybrid committee classifier performed best for the prone scans and was comparable to the other classifiers for the supine scans.
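
    As a rough sketch of the committee construction, the snippet below trains several SVMs and several neural networks on given feature subsets, averages the probabilities within each committee, and fuses the two committee outputs. The feature subsets, member counts and the equal-weight fusion are simplifying assumptions; the progressive and floating feature searches are not shown.

    # Sketch of SVM and NN committees combined into a hybrid classifier.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    def committee_proba(members, feature_subsets, X):
        """Average positive-class probability over committee members."""
        probs = [m.predict_proba(X[:, s])[:, 1]
                 for m, s in zip(members, feature_subsets)]
        return np.mean(probs, axis=0)

    def fit_hybrid(X, y, svm_subsets, nn_subsets):
        """X: candidates x 102 features; subsets: lists of selected feature indices."""
        svms = [SVC(probability=True).fit(X[:, s], y) for s in svm_subsets]
        nns = [MLPClassifier(max_iter=1000).fit(X[:, s], y) for s in nn_subsets]
        return svms, nns

    def hybrid_score(svms, nns, svm_subsets, nn_subsets, X):
        svm_committee = committee_proba(svms, svm_subsets, X)
        nn_committee = committee_proba(nns, nn_subsets, X)
        return 0.5 * (svm_committee + nn_committee)   # simple fusion of the two committees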

    Direct Classification of Type 2 Diabetes From Retinal Fundus Images in a Population-based Sample From The Maastricht Study

    Type 2 Diabetes (T2D) is a chronic metabolic disorder that can lead to blindness and cardiovascular disease. Information about early-stage T2D might be present in retinal fundus images, but to what extent these images can be used in a screening setting is still unknown. In this study, deep neural networks were employed to differentiate between fundus images from individuals with and without T2D. We investigated three methods to achieve high classification performance, measured by the area under the receiver operating characteristic curve (ROC-AUC). A multi-target learning approach that simultaneously outputs retinal biomarkers as well as T2D status works best (AUC = 0.746 ± 0.001). Furthermore, the classification performance can be improved when images with high prediction uncertainty are referred to a specialist. We also show that combining the images of the left and right eye of each individual, using a simple averaging approach, can further improve the classification performance (AUC = 0.758 ± 0.003). The results are promising, suggesting the feasibility of screening for T2D from retinal fundus images. (To be published in the proceedings of SPIE Medical Imaging 2020; 6 pages, 1 figure.)
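
    Two of the steps above can be illustrated with a short sketch: averaging the per-eye predictions of an individual, and referring the most uncertain cases to a specialist. The probability and uncertainty inputs are assumed to come from some trained model (e.g. the spread over Monte Carlo dropout samples); the referral fraction is an arbitrary example value, not the paper's operating point.

    # Sketch: combine left/right eye predictions and refer uncertain cases.
    import numpy as np

    def person_level_prediction(prob_left, prob_right):
        """Combine left- and right-eye T2D probabilities by simple averaging."""
        return 0.5 * (prob_left + prob_right)

    def retained_after_referral(uncertainties, referral_fraction=0.1):
        """Indices of cases kept for automatic classification after referring
        the most uncertain fraction to a specialist."""
        n_refer = int(np.ceil(referral_fraction * len(uncertainties)))
        order = np.argsort(uncertainties)            # ascending: most certain first
        return order[: len(uncertainties) - n_refer]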

    Assessment of algorithms for mitosis detection in breast cancer histopathology images

    The proliferative activity of breast tumors, which is routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole-slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top-performing method has an error rate that is comparable to the inter-observer agreement among pathologists.

    Cross-scanner and cross-protocol multi-shell diffusion MRI data harmonization: algorithms and results

    Cross-scanner and cross-protocol variability of diffusion magnetic resonance imaging (dMRI) data are known to be major obstacles in multi-site clinical studies, since they limit the ability to aggregate dMRI data and derived measures. Computational algorithms that harmonize the data and minimize such variability are critical to reliably combine datasets acquired from different scanners and/or protocols, thus improving the statistical power and sensitivity of multi-site studies. Different computational approaches have been proposed to harmonize diffusion MRI data or remove scanner-specific differences. To date, these methods have mostly been developed for or evaluated on single b-value diffusion MRI data. In this work, we present the evaluation results of 19 algorithms that were developed to harmonize the cross-scanner and cross-protocol variability of multi-shell diffusion MRI using a benchmark database. The proposed algorithms rely on various signal representation approaches and computational tools, such as rotation invariant spherical harmonics, deep neural networks and hybrid biophysical and statistical approaches. The benchmark database consists of data acquired from the same subjects on two scanners with different maximum gradient strengths (80 and 300 mT/m) and with two protocols. We evaluated the performance of these algorithms for mapping multi-shell diffusion MRI data across scanners and across protocols using several state-of-the-art imaging measures. The results show that data harmonization algorithms can reduce the cross-scanner and cross-protocol variability to a level similar to the scan-rescan variability of the same scanner and protocol. In particular, the LinearRISH algorithm, based on adaptive linear mapping of rotation invariant spherical harmonic (RISH) features, yields the lowest variability for our data in predicting fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK) and the RISH features themselves. Other algorithms, such as DIAMOND, SHResNet, DIQT and CMResNet, show further improvement in harmonizing the return-to-origin probability (RTOP). The performance of the different approaches provides useful guidelines on data harmonization for future multi-site studies.
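
    To make the RISH-based harmonization idea concrete, the sketch below computes, per spherical harmonic order, the rotation invariant feature (the sum of squared SH coefficients) and rescales one scanner's coefficients so their RISH features match the other scanner's. The coefficient layout and the per-order scaling shown are illustrative assumptions, not the exact implementation evaluated in the challenge.

    # Sketch of RISH features and a per-order linear rescaling of SH coefficients.
    import numpy as np

    def rish_features(sh_coeffs, orders):
        """sh_coeffs: (..., n_coeffs) SH coefficients; orders: SH order l per coefficient."""
        orders = np.asarray(orders)
        return {l: np.sum(sh_coeffs[..., orders == l] ** 2, axis=-1)
                for l in np.unique(orders)}

    def linear_rish_map(sh_coeffs, orders, rish_target, rish_source, eps=1e-12):
        """Scale source SH coefficients so their RISH features match the target's."""
        out = np.array(sh_coeffs, dtype=float, copy=True)
        orders = np.asarray(orders)
        for l in np.unique(orders):
            scale = np.sqrt(rish_target[l] / (rish_source[l] + eps))
            out[..., orders == l] *= scale[..., None]
        return out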

    Crowd disagreement about medical images is informative

    Classifiers for medical image analysis are often trained with a single consensus label, based on combining labels given by experts or crowds. However, disagreement between annotators may be informative, and thus removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare using the mean annotations, which illustrate consensus, to using the standard deviations and other distribution moments, which illustrate disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at https://figshare.com/s/5cbbce14647b66286544
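
    A small sketch of the feature construction: per lesion, the crowd annotations of each visual characteristic are summarized by distribution moments (mean for consensus; standard deviation, skewness and kurtosis for disagreement) and passed to a standard classifier. Column names and the choice of logistic regression are hypothetical, not the paper's exact setup.

    # Sketch: summarize crowd annotations by moments and classify lesions.
    import pandas as pd
    from scipy import stats
    from sklearn.linear_model import LogisticRegression

    def crowd_moment_features(annotations: pd.DataFrame) -> pd.DataFrame:
        """annotations: one row per (lesion_id, worker) with numeric visual scores."""
        grouped = annotations.groupby("lesion_id")
        feats = grouped.agg(["mean", "std", stats.skew, stats.kurtosis])
        feats.columns = ["_".join(map(str, col)) for col in feats.columns]
        return feats.fillna(0.0)

    # Hypothetical usage, with one melanoma label per lesion_id:
    # X = crowd_moment_features(crowd_df)
    # clf = LogisticRegression(max_iter=1000).fit(X, labels.loc[X.index])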

    Supervised local error estimation for nonlinear image registration using convolutional neural networks

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
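
    The label-free generation of training pairs can be sketched as follows: an image is warped with a known smooth random displacement field, and the per-pixel norm of that field serves as the regression target for the error-estimation network. The particular deformation model (Gaussian-smoothed noise) and the parameter values below are illustrative assumptions, not the paper's deformation model.

    # Sketch: generate a (warped image, registration-error map) training pair.
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def make_training_pair(image, max_disp=8.0, smoothness=16.0, seed=None):
        rng = np.random.default_rng(seed)
        h, w = image.shape
        # Smooth random displacement field, scaled to at most `max_disp` pixels.
        dx = gaussian_filter(rng.standard_normal((h, w)), smoothness)
        dy = gaussian_filter(rng.standard_normal((h, w)), smoothness)
        scale = max_disp / max(np.abs(dx).max(), np.abs(dy).max(), 1e-12)
        dx, dy = dx * scale, dy * scale
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        warped = map_coordinates(image, [yy + dy, xx + dx], order=1, mode="nearest")
        error_map = np.sqrt(dx ** 2 + dy ** 2)   # per-pixel target for the CNN
        return warped, error_map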

    Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis

    Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge that is frequently mentioned for supervised ML algorithms is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We give an overview of semi-supervised, multiple instance, and transfer learning in medical imaging, in both diagnosis and segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416